Code generation models have achieved impressive performance. However, they tend to be brittle: slight edits to a prompt can lead to very different generations. These robustness properties, critical for user experience when such models are deployed in real-life applications, are not well understood. Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area, and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code, covering docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice, to preserve the original semantic meaning, and thus to provide multifaceted assessments of a model's robustness. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models based on the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code serves as an objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, and function completion tasks derived from them. Interesting observations include: CodeGen is more robust than InCoder and GPT-J; models are most sensitive to syntax perturbations; and robustness evaluation is more challenging on MBPP than on HumanEval.
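To make the evaluation protocol concrete, here is a minimal Python sketch (not the official ReCode implementation) of the two core ideas: a semantics-preserving prompt transformation and a worst-case robustness metric. The function names and the result format are illustrative assumptions.

```python
# Sketch of a ReCode-style perturbation and worst-case metric (illustrative).
import re

def perturb_variable_names(prompt: str, mapping: dict) -> str:
    """Rename identifiers in a prompt; semantics are preserved because the
    same renaming is applied consistently to every occurrence."""
    for old, new in mapping.items():
        prompt = re.sub(rf"\b{re.escape(old)}\b", new, prompt)
    return prompt

def robust_pass_at_1(passed: dict) -> float:
    """Worst-case metric: a task counts as solved only if the generation
    passes unit tests under the nominal prompt AND every perturbed variant."""
    return sum(all(results) for results in passed.values()) / len(passed)

# passed maps task_id -> [nominal_ok, perturbed_ok_1, perturbed_ok_2, ...]
print(robust_pass_at_1({"t0": [True, True], "t1": [True, False]}))  # 0.5
```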
Weakly supervised semantic segmentation (WSSS) with image-level labels is a challenging task in computer vision. Mainstream approaches follow a multi-stage framework and suffer from high training costs. In this paper, we explore the potential of Contrastive Language-Image Pre-training models (CLIP) to localize different categories with only image-level labels and without any further training. To efficiently generate high-quality segmentation masks from CLIP, we propose a novel framework called CLIP-ES for WSSS. Our framework improves all three stages of WSSS with special designs for CLIP: 1) We introduce the softmax function into GradCAM and exploit the zero-shot ability of CLIP to suppress the confusion caused by non-target classes and backgrounds. Meanwhile, to take full advantage of CLIP, we re-explore text inputs under the WSSS setting and customize two text-driven strategies: sharpness-based prompt selection and synonym fusion. 2) To simplify the stage of CAM refinement, we propose a real-time class-aware attention-based affinity (CAA) module based on the inherent multi-head self-attention (MHSA) in CLIP-ViTs. 3) When training the final segmentation model with the masks generated by CLIP, we introduce a confidence-guided loss (CGL) to mitigate noise and focus on confident regions. Our proposed framework dramatically reduces the training cost of WSSS and demonstrates CLIP's capability to localize objects. Our CLIP-ES achieves SOTA performance on Pascal VOC 2012 and MS COCO 2014 while taking only 10% of the time previous methods require for pseudo mask generation. Code is available at https://github.com/linyq2117/CLIP-ES.
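As a rough illustration of design 1), the following hedged sketch shows GradCAM with the softmax taken over all class logits before backpropagation, so that non-target classes act as suppressors. The backbone, shapes, and linear head here are stand-ins for CLIP's image encoder and image-text similarity, not CLIP-ES code.

```python
# Toy softmax-GradCAM: gradients flow through a softmax over class scores.
import torch
import torch.nn.functional as F

feats = torch.randn(1, 64, 7, 7, requires_grad=True)   # last feature map
classifier = torch.nn.Linear(64, 10)                    # stand-in for CLIP text embeddings

logits = classifier(feats.mean(dim=(2, 3)))             # (1, 10) class scores
target = 3
score = F.softmax(logits, dim=-1)[0, target]            # softmax, not the raw logit
score.backward()

weights = feats.grad.mean(dim=(2, 3))                   # GAP over gradients
cam = F.relu((weights[:, :, None, None] * feats).sum(dim=1))
cam = cam / (cam.max() + 1e-8)                          # normalized CAM in [0, 1]
print(cam.shape)  # torch.Size([1, 7, 7])
```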
Recent cross-lingual cross-modal works attempt to extend Vision-Language Pre-training (VLP) models to non-English inputs and achieve impressive performance. However, these models focus only on understanding tasks and use encoder-only architectures. In this paper, we propose ERNIE-UniX2, a unified cross-lingual cross-modal pre-training framework for both generation and understanding tasks. ERNIE-UniX2 integrates multiple pre-training paradigms (e.g., contrastive learning and language modeling) on an encoder-decoder architecture and attempts to learn a better joint representation across languages and modalities. Furthermore, ERNIE-UniX2 can be seamlessly fine-tuned for a variety of downstream generation and understanding tasks. Pre-trained on both multilingual text-only and image-text datasets, ERNIE-UniX2 achieves SOTA results on various cross-lingual cross-modal generation and understanding tasks such as multimodal machine translation and multilingual visual question answering.
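A minimal sketch of how contrastive learning and language modeling might be combined into a single pre-training objective, as the abstract describes; the loss weighting `alpha` and all function names are assumptions rather than ERNIE-UniX2 internals.

```python
# Illustrative joint objective: contrastive alignment + decoder LM loss.
import torch
import torch.nn.functional as F

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

def joint_loss(img_emb, txt_emb, lm_logits, lm_targets, alpha=0.5):
    """Contrastive alignment plus decoder language modeling, jointly."""
    lm = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                         lm_targets.view(-1))
    return alpha * info_nce(img_emb, txt_emb) + (1 - alpha) * lm
```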
Video-and-language pre-training has shown promising results for learning generalizable representations. Most existing approaches model video and text in an implicit manner, without considering explicit structural representations of the multi-modal content. We refer to such representations as structural knowledge, which expresses rich semantics of multiple granularities. Related works propose object-aware approaches that inject similar knowledge as inputs. However, existing methods usually fail to effectively utilize such knowledge as regularizations to shape a superior cross-modal representation space. To this end, we propose a Cross-modaL knOwledge-enhanced Pre-training (CLOP) method with Knowledge Regularizations. Ours has two key designs: 1) a simple yet effective Structural Knowledge Prediction (SKP) task to pull together the latent representations of similar videos; and 2) a novel Knowledge-guided sampling approach for Contrastive Learning (KCL) to push apart cross-modal hard negative samples. We evaluate our method on four text-video retrieval tasks and one multi-choice QA task. The experiments show clear improvements, outperforming prior works by a substantial margin. Besides, we provide ablations and insights into how our methods affect the latent representation space, demonstrating the value of incorporating knowledge regularizations into video-and-language pre-training.
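One plausible reading of the KCL design is sketched below: among candidates that structural knowledge marks as dissimilar, pick those closest to the anchor in embedding space as hard negatives. This is an illustration of the idea, not the paper's implementation; the threshold and similarity format are assumptions.

```python
# Knowledge-guided hard-negative selection (illustrative).
import torch
import torch.nn.functional as F

def pick_hard_negatives(anchor, candidates, knowledge_sim, k=4, thresh=0.5):
    """anchor: (d,), candidates: (n, d), knowledge_sim: (n,) in [0, 1]."""
    emb_sim = F.cosine_similarity(anchor.unsqueeze(0), candidates, dim=-1)
    is_negative = knowledge_sim < thresh          # knowledge says "different"
    emb_sim = emb_sim.masked_fill(~is_negative, float("-inf"))
    return emb_sim.topk(k).indices                # hardest valid negatives

idx = pick_hard_negatives(torch.randn(8), torch.randn(10, 8), torch.rand(10))
print(idx)  # indices of the k hardest knowledge-confirmed negatives
```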
Personalized Federated Learning (PFL), which collaboratively trains a federated model while accommodating individual local clients under privacy constraints, has attracted much attention. Despite its popularity, it has been observed that existing PFL approaches yield sub-optimal solutions when the joint distributions of local clients diverge. To address this issue, we present Federated Modular Network (FedMN), a novel PFL approach that adaptively selects sub-modules from a module pool to assemble heterogeneous neural architectures for different clients. FedMN adopts a lightweight routing hypernetwork to model the joint distribution on each client and produce a personalized selection of module blocks for each client. To reduce the communication burden of existing FL approaches, we develop an efficient way for the clients and the server to interact. We conduct extensive experiments on real-world test beds, and the results show both the effectiveness and the efficiency of the proposed FedMN over the baselines.
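A minimal sketch of the per-client module selection described above: a small routing hypernetwork maps a client embedding to scores over a module pool, made differentiable here with Gumbel-softmax. The architecture sizes and the Gumbel-softmax choice are assumptions; the paper's exact routing mechanism may differ.

```python
# Illustrative modular client model with a routing hypernetwork.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModularClientModel(nn.Module):
    def __init__(self, n_blocks=4, dim=32):
        super().__init__()
        self.pool = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_blocks))
        self.router = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(),
                                    nn.Linear(16, n_blocks))  # hypernetwork

    def forward(self, x, client_emb):
        gate = F.gumbel_softmax(self.router(client_emb), hard=True)  # one-hot
        # Only the selected block's parameters need to be communicated.
        return sum(g * blk(x) for g, blk in zip(gate, self.pool))

model = ModularClientModel()
y = model(torch.randn(8, 32), torch.randn(32))
print(y.shape)  # torch.Size([8, 32])
```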
Domain adaptation (DA) aims to transfer the knowledge of a well-labeled source domain to facilitate learning on an unlabeled target domain. When turning to specific tasks such as indoor (Wi-Fi) localization, a cross-domain regressor must be learned to mitigate the domain shift. This paper proposes a novel method, Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model. Specifically, a discrepant bi-regressor architecture is developed to maximize the difference between the two regressors' predictions and thereby discover uncertain target instances far from the source distribution; an adversarial training mechanism between the feature extractor and the dual regressors is then adopted to produce domain-invariant representations. To further bridge the large domain gap, a domain-specific augmentation module is designed to synthesize two intermediate domains, one source-similar and one target-similar, gradually eliminating the mismatch between the original domains. Empirical studies on two cross-domain regression benchmarks illustrate the power of our method in solving the domain adaptive regression (DAR) problem.
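The alternating adversarial objective can be sketched as follows, assuming a toy feature extractor and two linear regression heads; the step sizes, architectures, and alternation schedule are illustrative, not the ABRNet recipe.

```python
# Illustrative bi-regressor adversarial alternation on an unlabeled target batch.
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
reg1, reg2 = nn.Linear(32, 1), nn.Linear(32, 1)
opt_reg = torch.optim.Adam([*reg1.parameters(), *reg2.parameters()], lr=1e-3)
opt_feat = torch.optim.Adam(feat.parameters(), lr=1e-3)

x_tgt = torch.randn(64, 16)                     # unlabeled target batch

# Step A: maximize regressor discrepancy to expose uncertain target points.
disc = (reg1(feat(x_tgt).detach()) - reg2(feat(x_tgt).detach())).abs().mean()
opt_reg.zero_grad(); (-disc).backward(); opt_reg.step()

# Step B: update the extractor to minimize the same discrepancy,
# pushing target features toward regions where the regressors agree.
disc = (reg1(feat(x_tgt)) - reg2(feat(x_tgt))).abs().mean()
opt_feat.zero_grad(); disc.backward(); opt_feat.step()
```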
Recently, score-based diffusion models have shown satisfactory performance in MRI reconstruction. Most of these methods require a large amount of fully sampled MRI data as a training set, which is sometimes difficult to obtain in practice. This paper proposes a fully-sampled-data-free score-based diffusion model for MRI reconstruction, which learns a fully sampled MR image prior from under-sampled data in a self-supervised manner. Specifically, we first infer the fully sampled MR image distribution from under-sampled data via Bayesian deep learning, then perturb the data distribution and approximate its probability density gradient by training a score function. Using the learned score function as a prior, we can reconstruct the MR image by performing conditional Langevin Markov chain Monte Carlo (MCMC) sampling. Experiments on public datasets show that the proposed method outperforms existing self-supervised MRI reconstruction methods and achieves performance comparable to conventional score-based diffusion methods trained on fully sampled data.
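A hedged sketch of one conditional Langevin sampling step consistent with the description above: the learned score pulls the estimate toward the image prior, while a data-consistency gradient ties it to the acquired under-sampled k-space. `score_net`, the mask, and the step sizes are placeholders, not the paper's parameters.

```python
# One conditional Langevin MCMC step for under-sampled MRI (illustrative).
import torch

def langevin_step(x, score_net, y, mask, step=1e-4, dc_weight=1.0):
    """x: current image estimate; y: measured k-space; mask: sampling mask."""
    score = score_net(x)                                   # approx. grad of log p(x)
    kspace = torch.fft.fft2(x)
    residual = mask * (kspace - y)                         # data-consistency error
    dc_grad = torch.fft.ifft2(residual).real
    noise = torch.randn_like(x)
    return x + step * (score - dc_weight * dc_grad) + (2 * step) ** 0.5 * noise
```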
Many open-domain dialogue models pre-trained on social media comments can generate coherent replies but struggle to produce engaging responses when interacting with real users. This phenomenon may stem mainly from the deficiency of annotated human-human conversations and the misalignment with human preferences. In this paper, we propose a novel and efficient approach, Diamante, to boost open-domain chatbots, in which two kinds of human feedback (explicit demonstrations and implicit preferences) are collected and leveraged. By asking annotators to select or amend model-generated candidate responses, Diamante efficiently collects human-demonstrated responses and constructs a Chinese chit-chat dataset. To enhance alignment with human preferences, Diamante leverages the implicit preferences expressed during data collection and introduces generation-evaluation joint training. Comprehensive experiments indicate that the Diamante dataset and joint training paradigm significantly boost the performance of Chinese pre-trained dialogue models.
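One way the generation-evaluation joint training could be instantiated is sketched below: a language-modeling loss on the human-demonstrated response plus a ranking loss that scores the preferred candidate above a rejected one. The scoring head, margin, and weighting are assumptions, not Diamante's published objective.

```python
# Illustrative generation-evaluation joint objective.
import torch
import torch.nn.functional as F

def joint_objective(lm_logits, lm_targets, score_preferred, score_rejected,
                    margin=0.1, beta=1.0):
    """LM loss on demonstrated responses + margin ranking on preferences."""
    lm_loss = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                              lm_targets.view(-1))
    pref_loss = F.relu(margin - (score_preferred - score_rejected)).mean()
    return lm_loss + beta * pref_loss
```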
Estimated time of arrival (ETA) prediction, also known as travel time estimation, is a fundamental task for a wide range of intelligent transportation applications, such as navigation, route planning, and ride-hailing services. To accurately predict the travel time of a route, it is essential to take into account contextual and predictive factors such as spatial-temporal interactions, driving behavior, and the propagation of traffic congestion. The ETA prediction models previously deployed at Baidu Maps have addressed the factors of spatial-temporal interactions (ConSTGAT) and driving behavior (SSML). In this work, we focus on modeling traffic congestion propagation patterns to improve ETA performance. Modeling traffic congestion propagation patterns is challenging: it requires accounting for the impact regions affected over time, as well as the cumulative effect of delay variations caused by traffic events on the road network. In this paper, we propose a practical industrial-grade ETA prediction framework named DuETA. Specifically, we construct a congestion-sensitive graph based on the correlations of traffic patterns, and we develop a route-aware graph transformer to directly learn the long-distance correlations of road segments. This design enables DuETA to capture the interactions between pairs of road segments that are spatially distant but highly correlated in their traffic conditions. Extensive experiments are conducted on large-scale, real-world datasets collected from Baidu Maps. Experimental results show that ETA prediction can significantly benefit from the learned traffic congestion propagation patterns. In addition, DuETA has been deployed in production at Baidu Maps, serving billions of requests every day, which demonstrates that DuETA is an industrial-grade and robust solution for large-scale ETA prediction services.
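A minimal sketch of the route-aware, congestion-sensitive attention idea: plain scaled dot-product attention over a route's segments, restricted by a correlation mask so that spatially distant but traffic-correlated segment pairs can still attend to each other. How DuETA actually builds that mask from traffic-pattern correlations is not shown here; the mask below is random for demonstration.

```python
# Attention over road segments restricted to a congestion-correlation mask.
import torch
import torch.nn.functional as F

def route_attention(seg_feats, corr_mask):
    """seg_feats: (n_segments, d); corr_mask: (n, n) bool, True = correlated."""
    d = seg_feats.size(-1)
    scores = seg_feats @ seg_feats.t() / d ** 0.5
    scores = scores.masked_fill(~corr_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ seg_feats   # updated segment states

feats = torch.randn(5, 8)
mask = torch.rand(5, 5) > 0.3
mask |= torch.eye(5, dtype=torch.bool)             # each segment sees itself
print(route_attention(feats, mask).shape)          # torch.Size([5, 8])
```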
Recently, untrained neural networks (UNNs) have shown satisfactory performance in MR image reconstruction from randomly sampled trajectories without using additional fully sampled training data. However, existing UNN-based methods do not fully exploit the physics priors of MR images, leading to poor performance in some common scenarios (e.g., partial Fourier, regular sampling, etc.) and a lack of theoretical guarantees on reconstruction accuracy. To bridge this gap, we propose a guaranteed k-space interpolation method using a specially designed UNN that is driven by three physics priors of MR images (or k-space data): sparsity, coil sensitivity smoothness, and phase smoothness. We also prove that the proposed method guarantees a tight bound on the accuracy of the interpolated k-space data. Finally, ablation experiments show that the proposed method characterizes the physics priors of MR images more accurately than existing conventional methods. Moreover, under a series of commonly used sampling trajectories, experiments also show that the proposed method consistently outperforms conventional parallel imaging methods and existing UNNs, and in some cases even exceeds state-of-the-art supervised-trained k-space deep learning methods.
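The core untrained-network idea, stripped of the paper's three physics priors, can be sketched as follows: a randomly initialized network is fit, with no training data, so that its output matches k-space only at the acquired locations, and the unacquired entries are read off the converged output. The toy real-valued k-space and the small MLP are simplifying assumptions.

```python
# Untrained-network k-space interpolation, minimal illustrative version.
import torch
import torch.nn as nn

n = 32
mask = torch.rand(n, n) < 0.4                      # sampling pattern
y = torch.randn(n, n) * mask                       # measured k-space (toy, real-valued)

net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n))
z = torch.randn(n, n)                              # fixed random input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):                               # fit only where data exists
    loss = ((net(z) - y)[mask] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

full_kspace = net(z).detach()                      # interpolated k-space
```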